14 research outputs found

    2-Step Scalar Deadzone Quantization for Bitplane Image Coding

    Piecewise mapping in HEVC lossless intra-prediction coding

    The lossless intra-prediction coding modality of the High Efficiency Video Coding (HEVC) standard provides high coding performance while allowing frame-by-frame access to the coded data. This is of interest in many professional applications such as medical imaging, automotive vision, and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using Differential Pulse Code Modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. Specifically, we analyze the range of values in residual blocks and accordingly apply a pwm function that maps specific residual values to unique lower values. The parameters associated with each pwm function are encoded, so that the corresponding inverse pwm function at the decoder can map the values back to the original residuals, which are then used to reconstruct the original signal. The mapping is therefore reversible and introduces no losses. We evaluate the pwm functions on 4×4 residual blocks computed after DPCM-based prediction for the lossless coding of a variety of camera-captured and screen content sequences. Evaluation results show that the pwm functions attain maximum bit-rate reductions of 5.54% and 28.33% for screen content material compared to DPCM-based and block-wise intra-prediction, respectively. Compared to Intra Block Copy, piecewise mapping attains maximum bit-rate reductions of 11.48% for camera-captured material.
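
    The reversible-mapping idea can be illustrated with a small sketch. The following Python/NumPy snippet is not the pwm function defined in the paper; it is a minimal, hypothetical example showing how the residual values of a 4×4 block can be mapped to unique lower values and recovered exactly from transmitted side information.

        import numpy as np

        def forward_pwm(block):
            """Map the residual values of a block to unique lower values.

            Illustrative stand-in for the paper's piecewise mapping (pwm):
            each distinct residual is replaced by its index in the sorted
            list of values present in the block; that list plays the role
            of the pwm parameters the decoder needs for the inverse mapping.
            """
            values = np.unique(block)                # sorted distinct residuals
            mapped = np.searchsorted(values, block)  # indices span a range no wider than the residuals'
            return mapped, values

        def inverse_pwm(mapped, values):
            """Recover the exact residual block from the mapped indices."""
            return values[mapped]

        rng = np.random.default_rng(0)
        residuals = rng.integers(-20, 21, size=(4, 4))   # toy 4x4 DPCM residual block
        mapped, params = forward_pwm(residuals)
        assert np.array_equal(inverse_pwm(mapped, params), residuals)  # lossless round trip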

    Stationary Probability Model for Microscopic Parallelism in JPEG2000

    Visually Lossless Strategies to Decode and Transmit JPEG2000 Imagery

    Accelerating BPC-PaCo through visually lossless techniques

    Fast image codecs are a current need in applications that deal with large amounts of images. Graphics Processing Units (GPUs) are suitable processors to speed up most kinds of algorithms, especially when they allow fine-grain parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs tailored for the highly parallel architectures of GPUs. This algorithm provides complexity scalability to allow faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are controlled only coarsely, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating visually lossless coding techniques into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, yielding images of higher quality to a human observer. Experimental results also indicate speedups of 12% with respect to the original BPC-PaCo.
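
    The visually lossless component can be sketched, under simplifying assumptions, with a per-subband visibility threshold that bounds how many least-significant bitplanes may be left uncoded. The thresholds below are placeholders, not the ones used in the paper, and the integration with BPC-PaCo's coding passes is omitted.

        import numpy as np

        # Hypothetical per-subband visibility thresholds (amplitude units).
        # Visually lossless schemes derive such thresholds from models of the
        # human visual system; these numbers are placeholders.
        VISIBILITY_THRESHOLDS = {"LL": 0.5, "HL": 1.5, "LH": 1.5, "HH": 3.0}

        def bitplanes_to_code(coeffs, subband):
            """Number of bitplanes to code so that the error left in the
            discarded bitplanes stays below the subband's visibility
            threshold (dropping the k least-significant bitplanes leaves a
            per-coefficient error smaller than 2**k)."""
            threshold = VISIBILITY_THRESHOLDS[subband]
            total = int(np.abs(coeffs).astype(np.int64).max()).bit_length()
            discardable = max(0, int(np.floor(np.log2(threshold))))
            return max(total - discardable, 0)

        hh = np.random.default_rng(1).normal(scale=8, size=(32, 32)).round()
        print(bitplanes_to_code(hh, "HH"))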

    Bitplane image coding with parallel coefficient processing

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
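
    The lockstep idea can be illustrated with a short sketch that emits the magnitude bits of all coefficients of a codeblock one bitplane at a time, processing every coefficient in the same step. The scanning order, context formation, probability model, and arithmetic coder that BPC-PaCo reformulates are not shown; the vectorized NumPy operations merely stand in for SIMD lanes.

        import numpy as np

        def lockstep_bitplane_bits(coeffs):
            """Return an array with one row per bitplane (MSB first) and one
            column per coefficient: each row is produced for all coefficients
            at once, mimicking lockstep SIMD execution."""
            mags = np.abs(coeffs.ravel()).astype(np.int64)
            num_planes = int(mags.max()).bit_length()
            planes = [((mags >> p) & 1).astype(np.uint8)
                      for p in range(num_planes - 1, -1, -1)]   # MSB -> LSB
            return np.vstack(planes) if planes else np.empty((0, mags.size), np.uint8)

        block = np.array([[5, -3], [0, 9]])
        print(lockstep_bitplane_bits(block))   # 4 bitplanes x 4 coefficients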

    Enhanced JPEG2000 Quality Scalability through Block-Wise Layer Truncation

    Quality scalability is an important feature of image and video coding systems. In JPEG2000, quality scalability is achieved through the use of quality layers that are formed in the encoder through rate-distortion optimization techniques. Quality layers provide optimal rate-distortion representations of the image when the codestream is transmitted and/or decoded at layer boundaries. Nonetheless, applications such as interactive image transmission, video streaming, or transcoding demand layer fragmentation. The common approach to truncating a layer is to keep its initial prefix, which may greatly penalize the quality of decoded images, especially when the layer allocation is inadequate. So far, only one method providing enhanced quality scalability for compressed JPEG2000 imagery has been proposed in the literature. However, that method provides quality scalability at the expense of high computational costs, which prevents its use in the aforementioned applications. This paper introduces Block-Wise Layer Truncation (BWLT), which, at negligible computational cost, enhances the quality scalability of compressed JPEG2000 images. The main insight behind BWLT is to dismantle and reassemble the to-be-fragmented layer by selecting the most relevant codestream segments of codeblocks within that layer. The selection process is conceived from a rate-distortion model that finely estimates the rate-distortion contributions of codeblocks. Experimental results suggest that BWLT achieves near-optimal performance even when the codestream contains a single quality layer.
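
    The selection step can be sketched as follows. The snippet is a simplified, hypothetical illustration: Segment, its dist_gain estimate, and the greedy slope ordering are assumptions standing in for the rate-distortion model that BWLT actually uses to rank codestream segments of codeblocks within the to-be-fragmented layer.

        from dataclasses import dataclass

        @dataclass
        class Segment:
            codeblock: int    # codeblock the bytes belong to
            length: int       # bytes this segment contributes to the layer
            dist_gain: float  # estimated distortion reduction (model-based in BWLT)

        def truncate_layer(segments, budget):
            """Keep the codeblock segments with the steepest estimated
            distortion-rate slope until the byte budget of the layer
            fragment is spent, instead of keeping the layer's initial
            prefix."""
            ranked = sorted(segments, key=lambda s: s.dist_gain / s.length, reverse=True)
            kept, used = [], 0
            for seg in ranked:
                if used + seg.length <= budget:
                    kept.append(seg)
                    used += seg.length
            return kept

        layer = [Segment(0, 120, 900.0), Segment(1, 80, 300.0), Segment(2, 200, 2500.0)]
        print([s.codeblock for s in truncate_layer(layer, 250)])   # -> [2]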

    FAST Rate Allocation for JPEG2000 Video Transmission Over Time-Varying Channels

    Transform optimization for the lossy coding of pathology whole-slide images

    Whole-slide images (WSIs) are high-resolution, 2D, color digital images that are becoming valuable tools for pathologists in clinical, research and formative scenarios. However, their massive size is hindering their widespread adoption. Even though lossy compression can effectively reduce compressed file sizes without affecting subsequent diagnoses, no lossy coding scheme tailored for WSIs has been described in the literature. In this paper, a novel strategy called OptimizeMCT is proposed to increase the lossy coding performance for this type of image. In particular, an optimization method is designed to find image-specific multicomponent transforms (MCTs) that exploit the high inter-component correlation present in WSIs. Experimental evidence indicates that the transforms yielded by OptimizeMCT consistently attain better coding performance than the Karhunen-Loève Transform (KLT) for all tested lymphatic, pancreatic and renal WSIs. More specifically, images reconstructed at the same bitrate exhibit average PSNR values 2.85 dB higher for OptimizeMCT than for the KLT, with differences of up to 5.17 dB.
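
    For reference, the KLT baseline mentioned above can be computed across the components of an image as in the generic NumPy sketch below; the OptimizeMCT search for image-specific transforms is not reproduced here.

        import numpy as np

        def klt_components(image):
            """Karhunen-Loève Transform across the components of an image
            shaped (components, height, width): decorrelates the components
            using the eigenvectors of their covariance matrix."""
            c, h, w = image.shape
            samples = image.reshape(c, -1).astype(np.float64)
            samples -= samples.mean(axis=1, keepdims=True)   # zero-mean per component
            cov = np.cov(samples)                            # c x c inter-component covariance
            _, eigvecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
            klt = eigvecs[:, ::-1].T                         # rows = basis vectors, strongest first
            return (klt @ samples).reshape(c, h, w), klt

        rgb = np.random.default_rng(2).integers(0, 256, size=(3, 64, 64))
        decorrelated, transform = klt_components(rgb)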